Formalism and judgement in assurance cases
This position paper deals with the tension between the desire for sound and auditable assurance cases and the current ubiquitous reliance on expert judgement. I believe that the use of expert judgement, though inevitable, needs to be much more cautious and disciplined than it usually is. The idea of assurance “cases” owes its appeal to an awareness that all too often critical decisions are made in ways that are difficult to justify or even to explain, leaving the doubt (for the decision makers as well as other interested parties) that the decision may be unsound. By building a well-structured “case” we would wish to allow proper scrutiny of the evidence and assumptions used, and of the arguments that link them to support a decision.
Software reliability and dependability: a roadmap
This roadmap identifies several priorities:
- shifting the focus from software reliability to user-centred measures of dependability in complete software-based systems;
- influencing design practice to facilitate dependability assessment;
- propagating awareness of dependability issues and the use of existing, useful methods;
- injecting some rigour into the use of process-related evidence for dependability assessment;
- better understanding issues of diversity and variation as drivers of dependability.
Bev Littlewood is founder-Director of the Centre for Software Reliability, and Professor of Software Engineering at City University, London. Prof Littlewood has worked for many years on problems associated with the modelling and evaluation of the dependability of software-based systems; he has published many papers in international journals and conference proceedings and has edited several books. Much of this work has been carried out in collaborative projects, including the successful EC-funded projects SHIP, PDCS, PDCS2, and DeVa. He has been employed as a consultant
Predicting Software Defects Based on Cognitive Error Theories
As the primary cause of software defects, human error is the key to understanding, and perhaps to predicting and preventing, software defects. However, little research has been done to forecast software defects based on their cognitive nature. This paper proposes an idea for predicting software defects by applying the current scientific understanding of human error mechanisms. This new prediction method is based on the main causal mechanism underlying software defects: an error-prone scenario triggers a sequence of human error modes. Preliminary evidence supporting this idea is presented.
Human Factors Standards and the Hard Human Factor Problems: Observations on Medical Usability Standards
With the increasing variety and sophistication of computer-based medical devices, and more diverse users and use environments, usability is essential, especially to ensure safety. Usability standards and guidelines play an important role. We reviewed several, focusing on the IEC 62366 and 60601 sets. It is plausible that these standards have reduced risks for patients, but we raise concerns regarding: (1) complex design trade-offs that are not addressed; (2) a focus on user interface design (e.g., making alarms audible) to the detriment of other human factors (e.g., ensuring users actually act upon alarms they hear); and (3) some definitions and scope restrictions that may create “blind spots”. We highlight potential related risks, e.g. that clear directives on “easier to understand” risks, though useful, may preclude mitigating other, more “difficult” ones; but we ask to what extent these negative effects can be avoided by standards writers, given objective constraints. Our critique is motivated by current research and incident reports, and considers standards from other domains and countries. It is meant to highlight problems relevant to designers, standards committees, and human factors researchers, and to trigger discussion about the potential and limits of standards.
On the use of testability measures for dependability assessment
Program “testability” is, informally, the probability that a program will fail under test if it contains at least one fault. When a dependability assessment has to be derived from the observation of a series of failure-free test executions (a common need for software subject to “ultra-high reliability” requirements), measures of testability can, in theory, be used to draw inferences on program correctness. We rigorously investigate the concept of testability and its use in dependability assessment, criticizing, and improving on, previously published results. We give a general descriptive model of program execution and testing, on which the different measures of interest can be defined. We propose a more precise definition of program testability than that given by other authors, and discuss how to increase testing effectiveness without impairing program reliability in operation. We then study the mathematics of using testability to estimate, from test results, the probability of program correctness and the probability of failures. To derive the probability of program correctness, we use a Bayesian inference procedure and argue that this is more useful than deriving a classical “confidence level”. We also show that a high testability is not an unconditionally desirable property for a program. In particular, for programs complex enough that they are unlikely to be completely fault-free, increasing testability may produce a program which will be less trustworthy, even after successful testing.
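The Bayesian step described above can be sketched under a deliberately simple two-point model (our assumption here, not the paper's full treatment): the program is either fault-free, or it contains faults and fails each independent test with probability equal to its testability. Observing n failure-free tests then updates a prior probability of correctness:

```python
def prob_correct_after_tests(prior_correct, testability, n_tests):
    """Posterior probability that the program is fault-free after
    n_tests failure-free executions, under a two-point model:
    a faulty program fails each independent test with probability
    `testability`; a fault-free program never fails.

    Bayes: P(correct | n passes) = P(correct) /
           (P(correct) + P(faulty) * (1 - testability)**n_tests)
    """
    p_faulty_and_pass = (1.0 - prior_correct) * (1.0 - testability) ** n_tests
    return prior_correct / (prior_correct + p_faulty_and_pass)
```

Under this toy model, higher testability makes failure-free testing more convincing (the posterior rises faster with n); the abstract's warning is that raising testability can also change the program itself, so the comparison is not that simple in practice.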
Assessing the Risk due to Software Faults: Estimates of Failure Rate versus Evidence of Perfection.
In the debate over the assessment of software reliability (or safety), as applied to critical software, two extreme positions can be discerned: the ‘statistical’ position, which requires that the claims of reliability be supported by statistical inference from realistic testing or operation, and the ‘perfectionist’ position, which requires convincing indications that the software is free from defects. These two positions naturally lead to requiring different kinds of supporting evidence, and actually to stating the dependability requirements in different ways, not allowing any direct comparison. There is often confusion about the relationship between statements about software failure rates and about software correctness, and about which evidence can support either kind of statement. This note clarifies the meaning of the two kinds of statement and how they relate to the probability of failure-free operation, and discusses their practical merits, especially for high required reliability or safety.
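The two kinds of claim can be put on a common scale via the probability of failure-free operation, under an illustrative two-point model (a simplification we introduce here, not necessarily the note's own formulation): with probability `p_perfect` the software is defect-free, and otherwise it fails each demand independently with probability `pfd_if_faulty`:

```python
def prob_failure_free(n_demands, p_perfect, pfd_if_faulty):
    """Probability of surviving n_demands independent demands without
    failure, when the software is defect-free (never fails) with
    probability p_perfect, and otherwise has probability of failure
    per demand pfd_if_faulty.  Illustrative two-point model only."""
    return p_perfect + (1.0 - p_perfect) * (1.0 - pfd_if_faulty) ** n_demands
```

The sketch makes the contrast concrete: the purely ‘statistical’ term (1 - pfd)^n decays towards zero as the number of demands grows, so for very long missions the survival probability is dominated by the ‘perfectionist’ term p_perfect.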
Comparing the effectiveness of testing methods in improving programs: the effect of variations in program quality
We compare the efficacy of different testing methods for improving the reliability of software. Specifically, we use modelling to compare “operational” testing, in which test cases are chosen according to their probability of occurring in actual use of the software, against “debug” testing methods, in which the testers look for test cases which they consider likely to cause failure, or that satisfy some coverage criterion. We base our comparisons on the reliability reached by the program at the end of testing. Differently from previous studies, we consider the probability distribution of the achieved reliability, and thus the probability of satisfying specific requirements, rather than just the average reliability achieved. We take account of two sources of variation: the variation between the actual test histories that are possible for a given program and a given test method, and the fact that different programs start testing with different faults and initial reliability levels. By necessity, we use very simplified models of reality. Yet we can show some interesting conclusions with important practical consequences. In general, there are stronger arguments in favor of operational testing than previous studies have shown.
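A crude Monte Carlo sketch of this kind of modelling (fault rates, the detection model, and all parameters are invented here for illustration) shows how one can compare the distribution of achieved reliability, not just its mean, under the two regimes:

```python
import random

def simulate(n_programs=2000, n_faults=5, n_tests=200, target=1e-3, seed=1):
    """Toy model: each simulated program starts with n_faults faults
    whose per-test failure probabilities are drawn at random (variation
    between programs).  Operational testing exposes a fault with its
    operational failure probability; 'debug' testing exposes every
    fault with the same fixed probability (equal effort per fault --
    an assumption).  A found fault is removed.  We record, per regime,
    the fraction of programs whose residual failure probability meets
    the target -- i.e. a point on the distribution of achieved
    reliability."""
    rng = random.Random(seed)
    meets = {"operational": 0, "debug": 0}
    for _ in range(n_programs):
        rates = [rng.uniform(0.0, 2e-3) for _ in range(n_faults)]
        debug_detect = sum(rates) / n_faults  # same average detection power
        for regime, detect in (("operational", None), ("debug", debug_detect)):
            remaining = list(rates)
            for _ in range(n_tests):
                for i, rate in enumerate(remaining):
                    d = rate if detect is None else detect
                    if rng.random() < d:
                        remaining.pop(i)  # fault found and fixed
                        break  # at most one fault exposed per test
            if sum(remaining) <= target:
                meets[regime] += 1
    return {k: v / n_programs for k, v in meets.items()}
```

Because the model tracks whole test histories per program, one can read off the probability of *meeting a requirement*, which (as the abstract argues) can rank the two methods differently from a comparison of average reliability alone.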
Rigorously assessing software reliability and safety
This paper summarises the state of the art in the assessment of software reliability and safety ("dependability"), and describes some promising developments. A sound demonstration of very high dependability is still impossible before operation of the software; but research is finding ways to make rigorous assessment increasingly feasible. While refined mathematical techniques cannot take the place of factual knowledge, they can allow the decision-maker to draw more accurate conclusions from the knowledge that is available.